9 research outputs found

    Challenges and opportunities of deep learning models for machinery fault detection and diagnosis: a review

    In the age of Industry 4.0, deep learning has attracted increasing interest across various research applications. In recent years, deep learning models have been extensively implemented in machinery fault detection and diagnosis (FDD) systems. The automated feature learning of deep architectures offers great potential for solving the problems of traditional fault detection and diagnosis (TFDD) systems, which rely on manual feature selection that requires prior knowledge of the data and is time intensive. However, the high performance of deep learning comes with challenges and costs. This paper presents a review of deep learning challenges related to machinery fault detection and diagnosis systems. The potential for future work on deep learning implementation in FDD systems is also briefly discussed.

    Low-speed bearing fault diagnosis based on ArSSAE model using acoustic emission and vibration signals

    The development of rolling element bearing fault diagnosis systems has attracted a great deal of attention because bearing components are highly prone to unexpected failures. However, under low-speed operating conditions, the diagnosis of bearing components remains challenging. In this paper, an adaptive resilient stacked sparse autoencoder (ArSSAE) is proposed to compensate for the shortcomings of conventional fault diagnosis systems at low speed. The efficiency of the proposed ArSSAE model is initially assessed using the CWRU database. The model is then evaluated on actual vibration analysis (VA) and acoustic emission (AE) signals measured on a bearing test rig at low operating speeds (48–480 rpm). Overall, the analysis demonstrates that the ArSSAE model can accurately diagnose bearing components under low-speed conditions.
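    The abstract does not detail the ArSSAE internals, but the building block such models stack is a sparse autoencoder, and "resilient" points to Rprop-style training. Below is a minimal sketch of one such layer, assuming PyTorch, an illustrative sparsity target of 0.05, a penalty weight of 0.1, and a synthetic input batch in place of real bearing signals:

```python
# Hypothetical single sparse-autoencoder layer of the kind stacked in
# SSAE-style models; all sizes and coefficients are illustrative.
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    def __init__(self, n_in=1024, n_hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = self.encoder(x)
        return self.decoder(h), h

def sparsity_penalty(h, rho=0.05, eps=1e-8):
    # KL divergence between the target activation rho and the mean
    # hidden activation, pushing most hidden units toward inactivity.
    rho_hat = h.mean(dim=0).clamp(eps, 1 - eps)
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()

model = SparseAE()
opt = torch.optim.Rprop(model.parameters())  # resilient back-propagation
x = torch.rand(32, 1024)                     # stand-in for a signal batch
for _ in range(100):
    opt.zero_grad()
    x_hat, h = model(x)
    loss = nn.functional.mse_loss(x_hat, x) + 0.1 * sparsity_penalty(h)
    loss.backward()
    opt.step()
```

    In a full stack, each trained encoder's hidden output would feed the next layer, with a classifier on top; the adaptive elements of ArSSAE are not described in the abstract and are not modelled here.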

    A novel blade fault diagnosis using a deep learning model based on image and statistical analysis

    Artificial intelligence technology has high potential for machinery fault detection and diagnosis. Blade failure is the main type of failure that usually occurs in gas turbines, and this component tends to fail unexpectedly. Detection and diagnosis of blade components differ from those of gears and bearings, for which standard vibration analysis exists and faults can be examined using frequency-domain analysis. Due to the complex structure of the blade system, the informative features of a blade fault in the vibration signal are often obscured by noise. Therefore, this paper proposes a system that combines time–frequency image analysis with a stacked sparse autoencoder (SSAE) model to tackle the challenge of blade fault detection and diagnosis. Experiments were carried out on a multi-stage blade system, and the results showed that the proposed system provides more than 90% diagnosis performance.
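    The time–frequency image step can be sketched independently of the SSAE. Assuming SciPy's spectrogram and an entirely synthetic signal in place of the blade vibration data, the idea is to turn each record into an image whose flattened pixels feed the autoencoder:

```python
# Minimal sketch: vibration record -> time-frequency image -> input vector.
# Sampling rate, window sizes, and the test tone are assumptions.
import numpy as np
from scipy.signal import spectrogram

fs = 10_000                                   # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 120 * t) + 0.5 * np.random.randn(t.size)

freqs, times, Sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=128)
image = 10 * np.log10(Sxx + 1e-12)            # dB-scaled time-frequency image
x = image.flatten()                           # flattened pixels -> SSAE input
print(image.shape, x.shape)
```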

    Differential evolution optimization for resilient stacked sparse autoencoder and its applications on bearing fault diagnosis

    The rolling element bearing is an important component in most rotating machinery, and the unexpected failure of a bearing may cause the whole mechanism to break down. Hence, research has focused on developing effective intelligent fault diagnosis methods that generate more accurate and robust diagnostic results. Bearing fault diagnosis based on a stacked sparse autoencoder (SSAE) architecture is proposed in this study. SSAE provides a featureless methodology for bearing fault diagnosis. However, the architecture of an SSAE is greatly influenced by its hyperparameter settings, there is no standard method of determining the optimal hyperparameter values, and the standard learning algorithm used to train SSAEs is time-intensive. In this paper, a method that combines differential evolution with a resilient back-propagation approach is proposed to improve the performance of SSAE networks in bearing fault classification. The differential evolution approach optimizes the SSAE hyperparameters associated with each hidden layer, such as the number of hidden nodes, the weight decay parameter, the sparsity parameter, and the weight of the sparsity penalty term; each additional hidden layer further complicates this selection process. The resilient back-propagation algorithm is used to train the SSAE network because of its low computational cost. Results from the analysis of three databases demonstrate that the proposed model achieves 99% accuracy in bearing fault diagnosis. The proposed model is found to be more user-friendly and more effective in handling multiple bearing fault conditions than the original autoencoder.
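    The hyperparameter search itself can be sketched with an off-the-shelf differential evolution routine. Here SciPy's implementation stands in for the paper's, the fitness function is a synthetic placeholder for "train an SSAE with resilient back-propagation and return the validation error", and the four bounds mirror the hyperparameters named above (all values are assumptions):

```python
# Minimal sketch of differential evolution over SSAE-style hyperparameters.
import numpy as np
from scipy.optimize import differential_evolution

def fitness(params):
    n_hidden, weight_decay, rho, beta = params
    n_hidden = int(round(n_hidden))
    # Placeholder: in the real system, train an SSAE with these settings
    # and return its validation error. A synthetic bowl keeps this runnable.
    return (n_hidden - 128) ** 2 / 1e4 + weight_decay + rho + beta

bounds = [(16, 512),      # number of hidden nodes
          (1e-6, 1e-2),   # weight decay parameter
          (0.01, 0.5),    # sparsity parameter (target activation)
          (0.1, 10.0)]    # weight of the sparsity penalty term
result = differential_evolution(fitness, bounds, maxiter=30, seed=0)
print(result.x, result.fun)
```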

    Gearbox fault diagnosis using a deep learning model with limited data sample

    Massive volumes of data are needed for deep learning (DL) models to provide accurate diagnosis results. Numerous studies of fault diagnosis systems have demonstrated the effectiveness of DL models over shallow machine learning (SL) in terms of feature extraction, feature dimensionality reduction, and diagnosis performance. Occasionally, during data acquisition, a problem with a sensor renders some of the data unsuitable for further analysis, leaving only a small data sample. To compensate for this deficiency, a DL model based on a stacked sparse autoencoder (SSAE) is designed to deal with limited sample data. In this article, the fault diagnosis system is developed based on time-frequency image pattern recognition, and two gearbox datasets are used to evaluate it. The experimental results show that the proposed system achieves high diagnostic accuracy even with limited sample data: 100% and 99% diagnosis performance on the experimental gearbox and wind turbine gearbox datasets, respectively, an improvement of 10% to 20% over the standard SSAE model. In addition, the proposed model achieved higher diagnosis performance than deep neural network and convolutional neural network models.
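    The limited-sample question can be posed independently of the SSAE: train on progressively smaller subsets and watch accuracy degrade. A minimal sketch, with synthetic data and a small MLP standing in for the time-frequency images and the proposed model:

```python
# Minimal sketch of a limited-sample evaluation: shrink the training budget
# and measure test accuracy. Data, model, and sizes are all stand-ins.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=64, n_informative=20,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)

for n in (20, 50, 100, 300):                  # shrinking training budgets
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                        random_state=0).fit(X_train[:n], y_train[:n])
    print(n, round(clf.score(X_test, y_test), 3))
```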

    Ensemble Classifier for Recognition of Small Variation in X-Bar Control Chart Patterns

    Manufacturing processes have become highly accurate and precise in recent years, particularly in the chemical, aerospace, and electronics industries. This has led researchers to investigate improved procedures for monitoring and detecting small process variations in order to keep pace with such advances. Among these techniques, statistical process control (SPC), in particular the control chart pattern (CCP), has become a popular choice for monitoring process variance and is utilized in numerous industrial and manufacturing applications. This study provides an improved control chart pattern recognition (CCPR) method focusing on X-bar chart patterns of small process variations, using an ensemble classifier composed of five complementary algorithms: decision tree, artificial neural network, linear support vector machine, Gaussian support vector machine, and k-nearest neighbours. Before the classification step, Nelson's run rules were utilized as a monitoring rule to distinguish between stable and unstable processes. The study's findings indicate that the proposed method improves classification performance for patterns with mean changes of less than 1.5 sigma, and confirm that the ensemble classifier outperforms the individual classifiers. The ensemble classifier can distinguish unstable pattern types with a classification accuracy of 99.55% and an ARL1 of 11.94.
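    The five-member ensemble maps directly onto standard library components. A minimal sketch using scikit-learn's majority-vote wrapper, with synthetic features standing in for the X-bar chart patterns and default hyperparameters rather than the paper's:

```python
# Minimal sketch of a five-member voting ensemble over synthetic CCP data.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=30, n_classes=5,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier([
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("ann", MLPClassifier(max_iter=2000, random_state=0)),
    ("svm_lin", SVC(kernel="linear")),
    ("svm_rbf", SVC(kernel="rbf")),      # Gaussian kernel
    ("knn", KNeighborsClassifier()),
], voting="hard")
ensemble.fit(X_tr, y_tr)
print(round(ensemble.score(X_te, y_te), 3))
```

    Hard voting mirrors the majority decision a heterogeneous ensemble typically uses; the paper's exact combination rule is not stated in the abstract.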

    Automatic classification of rotating machinery defects using Machine Learning (ML) algorithms

    Electric machines and motors have been the subject of enormous development; new concepts in design and control are expanding their applications in different fields. Vast amounts of data have been collected in almost every domain of interest; such data can be static, that is, they represent real-world processes at a fixed point in time. Vibration analysis and vibration monitoring, including the detection and monitoring of anomalies in vibration data, are widely used techniques for predictive maintenance in high-speed rotating machines. However, accurately identifying the presence of a bearing fault can be challenging in practice, especially when the failure is still at an incipient stage and the signal-to-noise ratio of the monitored signal is low. The main objective of this work is to design a system that analyses the vibration signals of a rotating machine, based on recorded sensor data, in the time and frequency domains. Machine Learning (ML) algorithms have attracted rapidly growing interest for this task; here, an ML system is used to classify and detect abnormal behaviour and to recognize the different machine operating modes (normal, degraded, and faulty). The proposed solution can be deployed as predictive maintenance for Industry 4.0.
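    A minimal sketch of such a pipeline: a few time-domain statistics and the dominant spectral line are extracted from each vibration record, then a classifier separates the three operating modes. The synthetic signals, the feature choices, and the random-forest classifier are all illustrative assumptions:

```python
# Minimal sketch: time/frequency features from vibration records, then a
# classifier over operating modes (0 normal, 1 degraded, 2 faulty).
import numpy as np
from scipy.stats import kurtosis
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def features(sig, fs=10_000):
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(sig.size, 1 / fs)
    return [sig.std(),                  # energy level (time domain)
            kurtosis(sig),              # impulsiveness (time domain)
            freqs[np.argmax(spec)]]     # dominant line (frequency domain)

def record(mode):
    # Synthetic stand-in: each mode shifts the tone and raises the noise.
    t = np.arange(0, 0.1, 1 / 10_000)
    sig = np.sin(2 * np.pi * 50 * (1 + mode) * t)
    return sig + (0.2 + 0.4 * mode) * rng.standard_normal(t.size)

modes = rng.integers(0, 3, 300)
X = np.array([features(record(m)) for m in modes])
clf = RandomForestClassifier(random_state=0).fit(X[:200], modes[:200])
print(round(clf.score(X[200:], modes[200:]), 3))
```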

    Comparative analysis of optimization algorithm on DSAE model for bearing fault diagnosis

    A rolling-element bearing is one of the most vital components in machinery, and maintaining bearing health is very important. Intelligent fault detection and diagnosis based on a deep sparse autoencoder (DSAE) is presented to improve the current maintenance strategy, which suffers from manual feature extraction and feature selection. In this work, a DSAE model made up of multiple neural network layers that perform automated feature extraction and dimensionality reduction is proposed. The DSAE model extracts the important features from Fast Fourier Transform (FFT) images by learning high-level features from the unlabeled images. However, the DSAE model contains four hidden layers and requires the selection of 12 hyperparameters, and manual hand-tuning is time-intensive. The hyperparameters are therefore selected automatically using an optimization algorithm. A comparative study is conducted on three optimization algorithms: particle swarm optimization (PSO), the grey wolf optimizer (GWO), and the genetic algorithm (GA). The overall analysis shows that the proposed model achieves 100% diagnosis accuracy. Furthermore, when tested on a completely new dataset, the DSAE model achieves 93.5% accuracy. The grey wolf optimizer converged more quickly than PSO and GA.
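    Of the three algorithms compared, the grey wolf optimizer can be sketched from its standard formulation. The objective below is a synthetic placeholder for "train the DSAE with these 12 hyperparameters and return the validation error", and the pack size, iteration count, and normalized bounds are assumptions:

```python
# Minimal sketch of the grey wolf optimizer over a 12-dimensional
# (normalized) hyperparameter space.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Placeholder for the DSAE validation error; a simple sphere
    # keeps the sketch runnable end to end.
    return np.sum((x - 0.3) ** 2)

dim, n_wolves, n_iter = 12, 10, 50             # 12 DSAE hyperparameters
lo, hi = 0.0, 1.0                              # normalized search bounds
wolves = rng.uniform(lo, hi, (n_wolves, dim))

for it in range(n_iter):
    fitness = np.array([objective(w) for w in wolves])
    alpha, beta, delta = wolves[np.argsort(fitness)[:3]]  # three leaders
    a = 2 - 2 * it / n_iter                    # linearly decreasing coefficient
    for i in range(n_wolves):
        new = np.zeros(dim)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            new += leader - A * np.abs(C * leader - wolves[i])
        wolves[i] = np.clip(new / 3, lo, hi)   # average pull toward leaders

best = wolves[np.argmin([objective(w) for w in wolves])]
print(best.round(3))
```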

    Gain scaling tuning of fuzzy logic Sugeno controller type for ride comfort suspension system using firefly algorithm

    A control system based on fuzzy logic (FL) is an effective controller that operates using a rule-based inference mechanism drawing on a knowledge database, producing suitable linguistic variables depending on the type of output required. Nevertheless, the FL controller design still has drawbacks that limit its capability to control the dynamic ride comfort of a vehicle suspension system. This study aims to improve the FL controller design by adding a gain scaling value to each input and output of the FL system. A metaheuristic firefly algorithm (FA) is used to optimize these gains. Taking the acceleration response of the suspension system as the objective function, the FA searches for the optimum gain values that improve the output of the FL controller. An external disturbance in the form of sinusoidal waves is applied to the system to verify the sensitivity and robustness of the proposed control schemes. A comparative assessment between the FL controller without gain scaling and the FL controller with gains tuned by the FA is then carried out, analysing the amplitude reduction of both body displacement and acceleration responses. Simulation results indicate that the FL controller with gain scaling responds better than the FL controller without it, improving performance by up to 52.1%.
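    The firefly search over gain-scaling factors can be sketched from the algorithm's standard formulation. The objective below is a placeholder for the suspension simulation (body acceleration under the sinusoidal road input), and the number of gains, their bounds, and the FA coefficients are assumptions:

```python
# Minimal sketch of the firefly algorithm tuning FL gain-scaling factors.
import numpy as np

rng = np.random.default_rng(0)

def objective(gains):
    # Placeholder for "simulate the Sugeno fuzzy controller with these
    # input/output gains and return a ride-comfort cost".
    return np.sum((gains - np.array([1.2, 0.8, 2.0])) ** 2)

n_fireflies, n_iter, dim = 15, 60, 3          # assumed: 2 input gains, 1 output gain
beta0, gamma, alpha = 1.0, 1.0, 0.2           # attractiveness, absorption, noise
pos = rng.uniform(0, 3, (n_fireflies, dim))

for _ in range(n_iter):
    cost = np.array([objective(p) for p in pos])  # brightness per generation
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if cost[j] < cost[i]:              # move i toward brighter firefly j
                r2 = np.sum((pos[i] - pos[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                pos[i] += beta * (pos[j] - pos[i]) + alpha * (rng.random(dim) - 0.5)
    alpha *= 0.97                              # cool the random walk

best = pos[np.argmin([objective(p) for p in pos])]
print(best.round(3))
```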